Reflected diffusion models adapt to low-dimensional data
Holk, Asbjørn, Strauch, Claudia, Trottner, Lukas
While the mathematical foundations of score-based generative models are increasingly well understood for unconstrained Euclidean spaces, many practical applications involve data restricted to bounded domains. This paper provides a statistical analysis of reflected diffusion models on the hypercube $[0,1]^D$ for target distributions supported on $d$-dimensional linear subspaces. A primary challenge in this setting is the absence of Gaussian transition kernels, which play a central role in standard theory in $\mathbb{R}^D$. By employing an easily implementable infinite series expansion of the transition densities, we develop analytic tools to bound the score function and its approximation by sparse ReLU networks. For target densities with Sobolev smoothness $\alpha$, we establish a convergence rate in the $1$-Wasserstein distance of order $n^{-\frac{\alpha+1-\delta}{2\alpha+d}}$ for arbitrarily small $\delta > 0$, demonstrating that the generative algorithm fully adapts to the intrinsic dimension $d$. These results confirm that the presence of reflecting boundaries does not degrade the fundamental statistical efficiency of the diffusion paradigm, matching the almost optimal rates known for unconstrained settings.
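To make the series expansion concrete, here is a minimal sketch (not the paper's implementation) for the one-dimensional case: the transition density of Brownian motion on $[0,1]$ reflected at both boundaries (generator $\frac{1}{2}\partial_x^2$ with Neumann boundary conditions) admits both a cosine eigenfunction expansion and an equivalent method-of-images representation. The function names and truncation levels below are illustrative choices, not from the paper.

```python
import numpy as np

def rbm_density(t, x, y, n_terms=200):
    """Transition density p_t(x, y) of Brownian motion on [0, 1]
    reflected at 0 and 1, via the truncated cosine eigenfunction
    expansion: 1 + 2 * sum_k exp(-k^2 pi^2 t / 2) cos(k pi x) cos(k pi y).
    Converges fast for moderate-to-large t."""
    k = np.arange(1, n_terms + 1)
    return 1.0 + 2.0 * np.sum(
        np.exp(-0.5 * (k * np.pi) ** 2 * t)
        * np.cos(k * np.pi * x) * np.cos(k * np.pi * y)
    )

def rbm_density_images(t, x, y, n_images=50):
    """Same density via the method of images: the Gaussian heat kernel
    is reflected across both boundaries. Converges fast for small t,
    so it serves as an independent numerical check."""
    k = np.arange(-n_images, n_images + 1)
    phi = lambda z: np.exp(-z ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)
    return np.sum(phi(y - x + 2 * k) + phi(y + x + 2 * k))
```

A density on the hypercube $[0,1]^D$ factorizes coordinate-wise into products of such one-dimensional kernels, which is what makes the expansion easy to implement in higher dimensions.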
Quantitative Propagation of Chaos for SGD in Wide Neural Networks

Supplementary material outline:
- S2.1 Presentation of the modified SGLD and its continuous counterpart
- Mean field approximation and propagation of chaos for mSGLD
- S3 Technical results
- S4 Quantitative propagation of chaos
- S4.1 Existence of strong solutions to the particle SDE
Learning to steer with Brownian noise
Ankirchner, Stefan, Christensen, Sören, Kallsen, Jan, Le Borne, Philip, Perko, Stefan
The modern theory of stochastic control typically assumes complete knowledge of the underlying system dynamics. While significant theoretical advances have been made in this area (see Øksendal and Sulem 2019; Fleming and Soner 2006), the practical application of stochastic control often faces challenges when the system model is uncertain or unknown. In recent years, reinforcement learning (RL) has emerged as a promising approach to this issue, enabling agents to learn optimal control policies through trial-and-error interactions with the environment. However, RL's success often hinges on the availability of vast amounts of data, and the learned control policies can be difficult to interpret, especially when deep learning techniques are employed (see Sutton 2018). To bridge the gap between fully model-based and model-free approaches, research has increasingly focused on model-based reinforcement learning.